14 research outputs found

    One hundred ways to process time, frequency, rate and scale in the central auditory system: a pattern-recognition meta-analysis

    Get PDF
    The mammalian auditory system extracts features from the acoustic environment based on the responses of spatially distributed sets of neurons in the subcortical and cortical auditory structures. The characteristic responses of these neurons (linearly approximated by their spectro-temporal receptive fields, or STRFs) suggest that auditory representations are formed, as early as in the inferior colliculi, on the basis of a time, frequency, rate (temporal modulations) and scale (spectral modulations) analysis of sound. However, how these four dimensions are integrated and processed in subsequent neural networks remains unclear. In this work, we present a new methodology to generate computational insights into the functional organization of such processes. We first propose a systematic framework to explore more than a hundred different computational strategies proposed in the literature to process the output of a generic STRF model. We then evaluate these strategies on their ability to compute perceptual distances between pairs of environmental sounds. Finally, we conduct a meta-analysis of the dataset of all these algorithms' accuracies to examine whether certain combinations of dimensions, and certain ways to treat those dimensions, are on the whole more computationally effective than others. We present an application of this methodology to a dataset of ten environmental sound categories, in which the analysis reveals that (1) models are most effective when they organize STRF data into frequency groupings, which is consistent with the known tonotopic organization of receptive fields in auditory structures, and that (2) models that treat STRF data as time series are no more effective than models that rely only on summary statistics along time, which corroborates recent experimental evidence on texture discrimination by summary statistics.
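
    The contrast between the "summary statistics" and "time series" strategies, and the role of frequency groupings, can be made concrete with a short sketch. The following Python illustration is not the authors' code: the tensor shape, the pooling scheme and the Euclidean metric are all assumptions made for the example.

```python
import numpy as np

def summary_stat_features(strf, n_freq_groups=8):
    """Average over time, then pool frequencies into groups (tonotopic grouping)."""
    # strf: hypothetical array of shape (time, freq, rate, scale)
    mean_over_time = strf.mean(axis=0)                       # (freq, rate, scale)
    groups = np.array_split(mean_over_time, n_freq_groups, axis=0)
    return np.stack([g.mean(axis=0) for g in groups]).ravel()

def perceptual_distance(strf_a, strf_b):
    """Euclidean distance between pooled feature vectors of two sounds."""
    return np.linalg.norm(summary_stat_features(strf_a) - summary_stat_features(strf_b))

# Usage with random stand-in tensors (100 frames, 128 channels, 9 rates, 11 scales).
a, b = np.random.rand(100, 128, 9, 11), np.random.rand(100, 128, 9, 11)
print(perceptual_distance(a, b))
```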

    Towards the Design of a Natural User Interface for Performing and Learning Musical Gestures

    Get PDF
    A large variety of musical instruments, whether acoustic or digital, are based on a keyboard scheme. Keyboard instruments can produce sounds through acoustic means, but in today's music they are increasingly used to control digital sound synthesis processes. Interestingly, for all the different possibilities of sonic outcome, the input remains a musical gesture. In this paper we present the conceptualization of a Natural User Interface (NUI), named the Intangible Musical Instrument (IMI), aiming to support both the learning of expert musical gestures and the performance of music as a unified user experience. The IMI is designed to recognize metaphors of pianistic gestures, focusing on subtle uses of the fingers and upper body. Based on a typology of musical gestures, a gesture vocabulary has been created and hierarchized from basic to complex. These piano-like gestures are finally recognized and transformed into sounds.
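
    As a rough illustration of what a gesture vocabulary "hierarchized from basic to complex" might look like in code, here is a minimal Python sketch; the gesture names and the composition scheme are hypothetical, not the IMI's actual vocabulary.

```python
from dataclasses import dataclass, field

# A complex gesture is defined as an ordered composition of more basic ones.
@dataclass
class Gesture:
    name: str
    components: list = field(default_factory=list)  # empty for basic gestures

    def is_basic(self) -> bool:
        return not self.components

# Hypothetical vocabulary entries, from basic to complex.
key_press   = Gesture("key_press")
key_release = Gesture("key_release")
trill       = Gesture("trill", [key_press, key_release, key_press, key_release])

print(trill.is_basic())  # False: composed of four basic gestures
```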

    Music gestural skills development engaging teachers, learners and expert performers

    Get PDF
    This article presents a platform for learning the theoretical knowledge and practical motor skills of musical gestures by combining the functionalities of Learning Management Systems (LMS) and Serious Gaming (SG). The teacher designs an educational scenario that articulates both theoretical and practical activities. The learner accesses online multimedia courses through an LMS client, which can be a computer, tablet or smartphone, and plays the serious game using a computer and motion capture sensors. During practice, the learner's gestures are compared in real time with the expert's gestures, and the learner is evaluated in terms of both correct fingerings and kinematics. Finally, the platform offers the learner a single profile covering theoretical and practical activities.
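
    One plausible way to implement the real-time comparison of learner and expert gestures is an alignment score such as dynamic time warping (DTW) over motion capture frames. The sketch below assumes hypothetical (frames x joints) pose arrays; it is an illustration, not the platform's actual algorithm.

```python
import numpy as np

def dtw_score(expert: np.ndarray, learner: np.ndarray) -> float:
    """Length-normalized DTW alignment cost between two pose sequences."""
    n, m = len(expert), len(learner)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(expert[i - 1] - learner[j - 1])  # per-frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

expert = np.random.rand(60, 15)    # stand-in expert kinematics (frames x joints)
learner = np.random.rand(75, 15)   # stand-in learner kinematics
print(f"kinematic deviation: {dtw_score(expert, learner):.3f}")  # lower is closer
```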

    Modeling, recognition of finger gestures and upper-body movements for musical interaction design

    No full text
    This thesis presents a novel musical instrument, named the Embodied Musical Instrument (EMI), which has been designed to answer two problems: how can we capture and model musical gestures, and how can we use this model to control sound synthesis parameters expressively? The EMI is articulated around an explicit mapping strategy, which draws inspiration from piano-playing techniques and from the affordances of other objects. The system we propose makes use of 3D cameras and computer vision algorithms in order to free the gesture from intrusive devices and ease the process of capture and performance, while enabling precise and reactive tracking of the fingertips and upper body. Using several 3D camera tracking solutions, we fully exploit their potential by adding a transparent sheet, which serves as a detection threshold for fingerings as well as providing simple but essential haptic feedback. We examined finger movements while tapping on the surface of the EMI and decomposed their trajectories into elementary phases, which enabled us to model and analyse piano-like gestures. A preliminary study of generic musical gestures directed our interest not only to the effective gestures operated by the fingers, as in keyboard instruments, but also to the accompanying and figurative gestures, which are mostly characterized by arm and head movements. Consequently, we distinguish two levels of interaction, delimited by two bounding volumes: a micro bounding volume that includes the micro-gestures operated with the fingers, and a macro bounding volume that includes larger movements of the upper body. Building on this, we extend our piano-like model to a 3D interaction paradigm in which higher-level musical parameters, such as sound effects (filters, reverberation, spatialisation), can be controlled continuously by free upper-body movements. We explored a set of real-world scenarios for this instrument, namely practice, composition and performance. The EMI introduces a framework for the capture and analysis of specific musical gestures. An off-line analysis of gesture features can reveal trends, faults and specificities of a musical interpretation. Several musical works have been created for the EMI and performed live, either solo or accompanied by a string quartet and other ensembles. User experience feedback shows that the instrument can be easily taught, if not self-taught, thanks to intuitive gesture paradigms drawn from piano-like gestures and other metaphorical gestures.
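
    The micro/macro distinction lends itself to a simple routing rule: a tracked point is handled by the fingering layer if it falls inside the micro volume, and by the upper-body effects layer if it falls inside the macro volume. The Python sketch below is illustrative only; the volume bounds are invented for the example.

```python
import numpy as np

# Hypothetical axis-aligned bounding volumes, in metres above the tabletop.
MICRO = (np.array([-0.2, -0.2, 0.00]), np.array([0.2, 0.2, 0.15]))
MACRO = (np.array([-1.0, -1.0, 0.00]), np.array([1.0, 1.0, 1.50]))

def classify(point: np.ndarray) -> str:
    """Route a tracked 3D point to an interaction layer."""
    def inside(lo, hi):
        return bool(np.all(point >= lo) and np.all(point <= hi))
    if inside(*MICRO):
        return "micro"   # finger layer: fingerings, note triggering
    if inside(*MACRO):
        return "macro"   # upper-body layer: filters, reverberation, spatialisation
    return "outside"

print(classify(np.array([0.05, 0.0, 0.02])))  # -> "micro"
```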

    Embodiment on a 3D tabletop musical instrument

    No full text

    A tabletop instrument for manipulation of sound morphologies with hands, fingertips and upper-body

    No full text
    We present a musical instrument, named the Embodied Musical Instrument (EMI), which allows musicians to perform free gestures with the upper body, including the hands and fingers, thanks to 3D vision sensors arranged around the tabletop. 3D interactive spaces delimit the boundaries within which the player performs metaphorical gestures in order to play with sound synthesis engines. A physically based sound synthesis engine and a sampler have been integrated into the system in order to manipulate sound morphologies in the context of electro-acoustic and electronic composition.
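
    A minimal sketch of the kind of mapping such 3D interactive spaces imply, with assumed coordinate ranges and a hypothetical parameter name: a tracked hand coordinate is clamped and normalized into a 0-1 control value for the synthesis engine.

```python
def to_control(value: float, lo: float, hi: float) -> float:
    """Clamp and normalize a tracked coordinate to a 0-1 control value."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

hand_height_m = 0.9                            # tracked hand height above the tabletop
damping = to_control(hand_height_m, 0.0, 1.2)  # hypothetical physical-model parameter
print(f"damping control: {damping:.2f}")
```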

    STRF: a signal processing primer

    No full text
    A one-pager describing the signal processing workflow of Shamma et al.'s STRF model. The STRF (spectro-temporal receptive field) model is a computational simulation of the signal processing performed by sensory neurons of the primary auditory cortex in mammals. It is based on in-vivo physiological measurements in ferrets, and has been applied to model human perception of musical timbre. This tutorial targets signal processing researchers with little to no background in computational neuroscience (e.g. Music Information Retrieval researchers).
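
    For readers who want a feel for the workflow before opening the primer, the following is a deliberately simplified Python sketch, not the Shamma et al. implementation: it stands in for the two stages (an auditory-like spectrogram, then a spectro-temporal modulation analysis) by taking a 2-D FFT of a log spectrogram, whose axes loosely correspond to rate (temporal modulation) and scale (spectral modulation).

```python
import numpy as np
from scipy.signal import spectrogram

sr = 16000
t = np.arange(sr) / sr
# Test signal: a 440 Hz tone with 4 Hz amplitude modulation.
x = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))

f, frames, S = spectrogram(x, fs=sr, nperseg=512, noverlap=384)
logS = np.log(S + 1e-12)                       # crude stand-in for cochlear compression
mod = np.abs(np.fft.fft2(logS - logS.mean()))  # 2-D modulation spectrum
print(mod.shape)                               # (scale bins, rate bins)
```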

    A Natural User Interface for Gestural Expression and Emotional Elicitation to access the Musical Intangible Cultural Heritage

    No full text
    This paper describes a prototype natural user interface, named the Intangible Musical Instrument, which aims to facilitate access to the knowledge of performers that constitutes musical Intangible Cultural Heritage, using off-the-shelf motion capture that is easily accessible to the public at large. The prototype is able to capture, model and recognize musical gestures (of the upper body, including the fingers) as well as to sonify them. The emotional status of the performer affects the sound parameters at the synthesis level. The Intangible Musical Instrument is able to support both learning and performing/composing by providing the user not only with intuitive gesture control but also with a unique user experience. In addition, a first evaluation of the Intangible Musical Instrument is presented, in which all the functionalities of the system are assessed. Overall, the results of this evaluation were very promising.
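
    As an illustration of how an emotional state could affect sound parameters at the synthesis level, here is a minimal Python sketch; the valence/arousal model and the parameter names are assumptions for the example, not the system's actual mapping.

```python
def emotion_to_synth(valence: float, arousal: float) -> dict:
    """Map valence/arousal in [-1, 1] to illustrative synthesis controls."""
    return {
        "brightness": 0.5 + 0.5 * valence,          # happier -> brighter spectrum
        "attack_ms": 100 - 80 * max(arousal, 0.0),  # more aroused -> sharper attacks
    }

print(emotion_to_synth(valence=0.6, arousal=0.8))
```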

    Reproductive strategy as a piece of the biogeographic puzzle: a case study using Antarctic sea stars (Echinodermata, Asteroidea).

    No full text
    Aim: To describe and analyse asteroid biogeographic patterns in the Southern Ocean (SO) and to test whether reproductive strategy (brooder versus broadcaster) can explain distribution patterns at the scale of the entire class. We hypothesize that brooding and broadcasting species display different biogeographic patterns.
    Location: Southern Ocean, south of 45°S.
    Methods: Over 14,000 asteroid occurrences are analysed using a bootstrapped spanning network (BSN), non-metric multidimensional scaling (nMDS) and clustering to uncover the spatial structure of faunal similarities among 25 bioregions.
    Results: The main biogeographic patterns are congruent with previous works based on other taxa and highlight the isolation of New Zealand, the high richness of the Scotia Arc area (particularly in brooding species), an East/West Antarctic differentiation, and the faunal affinities between South America and the sub-Antarctic islands. Asteroids show lower endemism levels than previously reported, with 29% of species occurring only in Antarctica. In particular, asteroids from Tierra del Fuego show affinities with those of West Antarctica at the species level, suggesting a recent mixing of assemblages. Biogeographic patterns are strongly linked to reproductive strategy. Patterns also differ according to taxonomic level, revealing the underlying role of historical factors.
    Main conclusions: Patterns of sea star biogeography are consistent with results obtained for other marine groups and are strongly linked to reproductive strategy.
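
    A minimal sketch, on fabricated presence/absence data, of the analysis chain named in the abstract: pairwise faunal dissimilarity between bioregions, followed by non-metric multidimensional scaling and hierarchical clustering (the bootstrapped spanning network step is omitted here).

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
occ = rng.integers(0, 2, size=(25, 300))    # 25 bioregions x 300 species (fabricated)

d = pdist(occ, metric="jaccard")            # faunal dissimilarity (1 - Jaccard)
coords = MDS(n_components=2, metric=False,  # non-metric MDS on the dissimilarities
             dissimilarity="precomputed",
             random_state=0).fit_transform(squareform(d))
clusters = fcluster(linkage(d, method="average"), t=3, criterion="maxclust")
print(coords.shape, np.unique(clusters))    # 2-D ordination, 3 cluster labels
```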